February 23 - March 1
Draft Amnesty Week

A week for posting incomplete, scrappy, or otherwise draft-y posts.

Quick takes

Linch · 4d
PSA: regression to the mean (a.k.a. mean reversion) is a statistical artifact, not a causal mechanism. So mean regression says that children of tall parents are likely to be shorter than their parents, but it also says that parents of tall children are likely to be shorter than their children. Put differently, mean regression goes in both directions. This is well-understood enough here in principle, but imo enough people get this wrong in practice that the PSA is worthwhile nonetheless.
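A quick simulation sketch of the point above. The model is an assumption for illustration only: each person's height is a shared (e.g. genetic) component plus independent noise, so parent and child are correlated but neither causes the other's deviation. Conditioning on either one being tall shows the regression in both directions.

```python
import random

random.seed(0)

# Assumed toy model: height = shared component + independent noise (cm).
n = 100_000
pairs = []
for _ in range(n):
    shared = random.gauss(170, 6)          # component common to parent and child
    parent = shared + random.gauss(0, 4)   # independent noise for the parent
    child = shared + random.gauss(0, 4)    # independent noise for the child
    pairs.append((parent, child))

avg = lambda xs: sum(xs) / len(xs)

# Condition on tall parents: their children are, on average, shorter than they are.
tall_parents = [(p, c) for p, c in pairs if p > 180]
print(avg([p for p, _ in tall_parents]), avg([c for _, c in tall_parents]))

# Condition on tall children: their parents are, on average, shorter than they are.
tall_children = [(p, c) for p, c in pairs if c > 180]
print(avg([p for p, _ in tall_children]), avg([c for _, c in tall_children]))
```

Nothing causal is happening in either direction; selecting on an extreme value of one noisy measurement guarantees the other, equally noisy measurement tends back toward the mean.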
NickLaing · 3d
Is there any possibility of the forum having an AI-writing detector in the background, which perhaps only the admins can see but which could be queried by suspicious users? I really don't like AI writing and have called it out a number of times, but have been wrong once. I imagine this has been thought about, and there might even be a form of this going on already. In saying this, my first post on LessWrong was scrapped because they identified it as AI-written, even though I have NEVER used AI in online writing, not even for checking/polishing. So that system obviously isn't perfect.
I’m pretty confident the EA community is underdiscussing how to prevent global AGI-powered autocracy, especially if US democracy implodes under AGI pressure. There are two key questions here: (i) how to make US democracy more resilient, and (ii) how to make the world less dependent on the resilience of US democracy.
The Forum should normalize public red-teaming for people considering new jobs, roles, or project ideas. If someone is seriously thinking about a position, they should feel comfortable posting the key info — org, scope, uncertainties, concerns, arguments for — and explicitly inviting others to stress-test the decision. Some of the best red-teaming I’ve gotten hasn’t come from my closest collaborators (whose takes I can often predict), but from semi-random thoughtful EAs who notice failure modes I wouldn’t have caught alone, or who think differently enough to instantly spot things that would have taken me longer to figure out. Right now, a lot of this only happens at EAGs or in private docs, which feels like an information bottleneck. If many thoughtful EAs are already reading the Forum, why not use it as a default venue for structured red-teaming?

Public red-teaming could:
* reduce unilateralist mistakes,
* prevent coordination failures (I’ve almost spent serious time on things multiple people were already doing — reinventing the wheel is common and costly).

Obviously there are tradeoffs — confidentiality, social risk, signaling concerns — but I’d be excited to see norms shift toward “post early, get red-teamed, iterate publicly,” rather than waiting for a handful of coffee chats.
I. Once upon a time, there was an EA named Alice. EA made a lot of sense to Alice, and she believed that some niche problems/causes were astronomically bigger than others. But she eventually decided that (1) the theories of change were confusing/suspicious and (2) there's substantial evidence that a bunch of EA work is net-negative. So she decided to become a teacher or doctor or something.

II. Alice made a mistake! If she thinks that some problems/causes are astronomically bigger than others, and she's skeptical of certain approaches, she should look for better approaches, not give up on those problems/causes! For example, she could:
* Find an intervention (in the great problems/causes) that she believes in, and do that
* Defer to people she really respects on the topic
* Try to understand the problem and possible interventions; do strategy/prioritization/deconfusion work (for herself, or maybe benefiting the whole community)
* Develop relevant skills and/or save up money, and set herself up to notice if there's more clarity or there are great opportunities in the future
* Accept sign-uncertainty and do positive-EV stuff

III. This is actually about my friend Bob, who's sometimes like: "I work on AI safety, but I feel clueless about whether we're actually helping, and I see that farmed animal suffering is a huge problem, and I want to go work on farmed animal welfare." If Bob still believes that the AI stuff is astronomically more important than the animal stuff, Bob is making the same mistake as Alice!